We consider reinforcement learning in an episodic, finite, stage-dependent Markov decision process environment of horizon $H$ with $S$ states and $A$ actions. The performance of an agent is measured by the regret after interacting with the environment for $T$ episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in $H$, $S$, $A$, and $T$ per state-action pair. For OPSRL we guarantee a high-probability regret bound of order at most $\widetilde{\mathcal{O}}(\sqrt{H^3SAT})$ ignoring $\text{poly}\log(HSAT)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms which may be of independent interest. Specifically, we extend the normal approximation-based lower bound for Beta distributions of Alfers and Dinges [1984] to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3SAT})$, thereby answering the open problem raised by Agrawal and Jia [2017b] for the episodic setting.
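A minimal sketch of the kind of optimistic backup that posterior sampling with a few samples performs for a single state-action pair; the uniform Dirichlet prior and the function name are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def optimistic_backup(counts, v_next, n_samples, rng):
    # counts: visit counts of each next state for one (s, a) pair;
    # they induce a Dirichlet posterior over the transition vector.
    # v_next: estimated values of the next states at the following stage.
    alpha = counts + 1.0                       # assumed Dirichlet(1,...,1) prior
    samples = rng.dirichlet(alpha, size=n_samples)
    # Optimism: keep the best backed-up value among the posterior
    # samples; OPSRL needs only logarithmically many such samples.
    return (samples @ v_next).max()

rng = np.random.default_rng(0)
q_upper = optimistic_backup(np.array([3.0, 1.0, 0.0]),
                            np.array([1.0, 0.5, 0.0]),
                            n_samples=8, rng=rng)
```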
This paper provides a finite-time analysis of linear stochastic approximation (LSA) algorithms with fixed step size, a core method in statistics and machine learning. LSA is used to compute approximate solutions of a $d$-dimensional linear system $\bar{\mathbf{A}}\theta = \bar{\mathbf{b}}$ for which $(\bar{\mathbf{A}}, \bar{\mathbf{b}})$ can only be estimated through (asymptotically) unbiased observations $\{(\mathbf{A}(Z_n), \mathbf{b}(Z_n))\}_{n \in \mathbb{N}}$. We consider here the case where $\{Z_n\}_{n \in \mathbb{N}}$ is an i.i.d. sequence or a uniformly geometrically ergodic Markov chain, and derive $p$-th moment and high-probability bounds for the iterates defined by LSA and its Polyak-Ruppert averaged version. More precisely, we establish bounds of order $(p \alpha t_{\operatorname{mix}})^{1/2} d^{1/p}$ on the $p$-th moment of the last iterate of LSA. In this formula, $\alpha$ is the step size of the procedure and $t_{\operatorname{mix}}$ is the mixing time of the underlying chain ($t_{\operatorname{mix}} = 1$ in the i.i.d. setting). We then prove finite-time instance-dependent bounds on the Polyak-Ruppert averaged sequence of iterates. These results are sharp in the sense that the leading term we obtain matches the local asymptotic minimax limit, including tight dependence on the parameters $(d, t_{\operatorname{mix}})$ in the higher-order terms.
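A minimal sketch of the LSA recursion with fixed step size and Polyak-Ruppert tail averaging that these bounds concern; the toy i.i.d. observation model at the end is an illustrative assumption:

```python
import numpy as np

def lsa_polyak_ruppert(sample_Ab, theta0, alpha, n_iters, burn_in=0):
    """Fixed step-size LSA with Polyak-Ruppert tail averaging (sketch)."""
    theta = theta0.astype(float).copy()
    theta_bar = np.zeros_like(theta)
    for n in range(n_iters):
        A_n, b_n = sample_Ab()                       # noisy draw of (A_bar, b_bar)
        theta = theta - alpha * (A_n @ theta - b_n)  # LSA recursion
        if n >= burn_in:                             # running average of the tail
            theta_bar += (theta - theta_bar) / (n - burn_in + 1)
    return theta, theta_bar

# toy i.i.d. example: solve A_bar @ theta = b_bar with A_bar = 2I, b_bar = 1
rng = np.random.default_rng(0)
d = 4
sample = lambda: (2 * np.eye(d) + 0.1 * rng.standard_normal((d, d)),
                  np.ones(d) + 0.1 * rng.standard_normal(d))
last_iter, average = lsa_polyak_ruppert(sample, np.zeros(d),
                                        alpha=0.05, n_iters=5000, burn_in=500)
```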
This paper investigates the approximation properties of deep neural networks with piecewise-polynomial activation functions. We derive the required depth, width, and sparsity of a deep neural network to approximate any H\"{o}lder smooth function up to a given approximation error in H\"{o}lder norms in such a way that all weights of this neural network are bounded by $1$. The latter feature is essential to control generalization errors in many statistical and machine learning applications.
We propose the Bayes-UCBVI algorithm for reinforcement learning in tabular, stage-dependent, episodic Markov decision processes: a natural extension of the Bayes-UCB algorithm of Kaufmann et al. (2012) for multi-armed bandits. Our method uses the quantile of a Q-value function posterior as an upper confidence bound on the optimal Q-value function. For Bayes-UCBVI, we prove a regret bound of order $\widetilde{O}(\sqrt{H^3SAT})$ where $H$ is the length of one episode, $S$ the number of states, $A$ the number of actions, and $T$ the number of episodes, matching the lower bound of $\Omega(\sqrt{H^3SAT})$ up to poly-$\log$ terms in $H, S, A, T$ for sufficiently large $T$. To the best of our knowledge, this is the first algorithm that obtains an optimal dependence on the horizon $H$ (and $S$) without the need for an involved Bernstein-like bonus or noise. Crucial to our analysis is a new fine-grained anti-concentration bound for a weighted Dirichlet sum that may be of independent interest. We then explain how Bayes-UCBVI can easily be extended beyond the tabular setting, exhibiting a strong link between our algorithm and the Bayesian bootstrap (Rubin, 1981).
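A minimal sketch of the quantile-as-bonus idea for a single state-action pair, assuming a uniform Dirichlet prior over next states; the paper's exact posterior construction and quantile schedule differ:

```python
import numpy as np

def quantile_q_upper_bound(counts, r_hat, v_next, delta, n_samples, rng):
    # counts: next-state visit counts for one (s, a) pair; with the
    # assumed uniform prior they define a Dirichlet posterior over
    # the transition probabilities.
    alpha = counts + 1.0
    p = rng.dirichlet(alpha, size=n_samples)   # posterior samples
    q_samples = r_hat + p @ v_next             # backed-up Q-value samples
    # The high quantile of the induced posterior plays the role of the
    # upper confidence bound, replacing an explicit exploration bonus.
    return np.quantile(q_samples, 1.0 - delta)

rng = np.random.default_rng(0)
ucb = quantile_q_upper_bound(np.array([4.0, 2.0, 0.0]), r_hat=0.3,
                             v_next=np.array([1.0, 0.4, 0.0]),
                             delta=0.05, n_samples=512, rng=rng)
```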
We develop an explore-exploit Markov chain Monte Carlo algorithm ($\operatorname{Ex^2MCMC}$) that combines multiple global proposals and local moves. The proposed method is massively parallelizable and extremely computationally efficient. We prove $V$-uniform geometric ergodicity of $\operatorname{Ex^2MCMC}$ under realistic conditions and compute explicit bounds on the mixing rate, showing the improvement brought by multiple global moves. We show that $\operatorname{Ex^2MCMC}$ allows fine-tuning of exploitation (local moves) and exploration (global moves) via a new approach to proposing dependent global moves. Finally, we develop an adaptive scheme, $\operatorname{FlEx^2MCMC}$, that learns the distribution of global moves using normalizing flows. We illustrate the efficiency of $\operatorname{Ex^2MCMC}$ and its adaptive versions on many classical sampling benchmarks. We also show that these algorithms improve the quality of sampling from GANs treated as energy-based models.
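A minimal sketch of one explore-exploit iteration in this spirit, pairing an iterated-SIR global move with local Metropolis refinement; the random-walk local kernel is a dependency-free stand-in for the gradient-based kernels (e.g., MALA) that such methods typically use:

```python
import numpy as np

def ex2mcmc_step(x, log_target, global_sampler, global_logpdf,
                 n_global, n_local, step, rng):
    # Exploration: draw fresh independent global proposals, append the
    # current state, and resample with importance weights (i-SIR kernel).
    props = np.vstack([x[None, :], global_sampler(n_global)])
    logw = np.array([log_target(p) - global_logpdf(p) for p in props])
    w = np.exp(logw - logw.max())
    x = props[rng.choice(len(props), p=w / w.sum())]
    # Exploitation: a few local Metropolis steps refine the chosen point.
    for _ in range(n_local):
        y = x + step * rng.standard_normal(x.shape)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
    return x

# toy usage: sample a 2-D Gaussian via standard-normal global proposals
rng = np.random.default_rng(0)
log_t = lambda x: -0.5 * ((x - 1.0) ** 2).sum()
g_samp = lambda n: rng.standard_normal((n, 2))
g_logp = lambda x: -0.5 * (x ** 2).sum()
x = np.zeros(2)
for _ in range(100):
    x = ex2mcmc_step(x, log_t, g_samp, g_logp,
                     n_global=16, n_local=3, step=0.3, rng=rng)
```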
In this work, we undertake a thorough study of the non-asymptotic properties of the vanilla generative adversarial network (GAN). We derive theoretical guarantees for density estimation with GANs under a proper choice of the deep neural network classes representing generators and discriminators. In particular, we prove that the resulting estimate converges to the true density $\mathsf{p}^*$ in terms of the Jensen-Shannon (JS) divergence at the rate $(\log{n}/n)^{2\beta/(2\beta+d)}$, where $n$ is the sample size and $\beta$ measures the smoothness of $\mathsf{p}^*$. To the best of our knowledge, this is the first result in the literature on density estimation using vanilla GANs with JS rates faster than $n^{-1/2}$ in the regime $\beta > d/2$. Moreover, we show that the obtained rate is minimax optimal (up to logarithmic factors) for the considered class of densities.
Many challenging reinforcement learning (RL) problems require designing a distribution of tasks that can be applied to train effective policies. This distribution of tasks can be specified by a curriculum, which is meant to improve the results of learning and accelerate it. We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning, where a task sequence is created based on the success rate of each task. In this setting, each task is an algorithmically created environment instance with a unique configuration. The algorithm selects the order of tasks that provides the fastest learning for agents. The probability of selecting any of the tasks for the next stage of learning is determined by evaluating its performance score in previous stages. Experiments were carried out in the Partially Observable Grid Environment for Multiple Agents (POGEMA) and on the Procgen benchmark. We demonstrate that SITP matches or surpasses the results of other curriculum design methods. Our method can be implemented with a handful of minor modifications to any standard RL framework and provides useful prioritization with minimal computational overhead.
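A minimal sketch of success-based task prioritization in the spirit of SITP; the learning-progress score and the softmax temperature below are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

class SITP:
    """Sketch: tasks whose success rate changed most between the two
    latest evaluation stages are assumed to provide the fastest
    learning signal and are sampled more often."""

    def __init__(self, n_tasks, temperature=1.0):
        self.prev_success = np.zeros(n_tasks)
        self.scores = np.ones(n_tasks)   # start uniform
        self.temperature = temperature

    def update(self, success_rates):
        # learning progress ~ absolute change in per-task success rate
        success_rates = np.asarray(success_rates, dtype=float)
        self.scores = np.abs(success_rates - self.prev_success)
        self.prev_success = success_rates.copy()

    def sample_task(self, rng):
        logits = self.scores / self.temperature
        p = np.exp(logits - logits.max())
        return rng.choice(len(p), p=p / p.sum())

prioritizer = SITP(n_tasks=5)
prioritizer.update([0.1, 0.5, 0.2, 0.9, 0.0])
task = prioritizer.sample_task(np.random.default_rng(0))
```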
The task of video prediction and generation is known to be notoriously difficult, with the research in this area largely limited to short-term predictions. Though plagued with noise and stochasticity, videos consist of features that are organised in a spatiotemporal hierarchy, different features possessing different temporal dynamics. In this paper, we introduce Dynamic Latent Hierarchy (DLH) -- a deep hierarchical latent model that represents videos as a hierarchy of latent states that evolve over separate and fluid timescales. Each latent state is a mixture distribution with two components, representing the immediate past and the predicted future, causing the model to learn transitions only between sufficiently dissimilar states, while clustering temporally persistent states closer together. Using this unique property, DLH naturally discovers the spatiotemporal structure of a dataset and learns disentangled representations across its hierarchy. We hypothesise that this simplifies the task of modeling temporal dynamics of a video, improves the learning of long-term dependencies, and reduces error accumulation. As evidence, we demonstrate that DLH outperforms state-of-the-art benchmarks in video prediction, is able to better represent stochasticity, as well as to dynamically adjust its hierarchical and temporal structure. Our paper shows, among other things, how progress in representation learning can translate into progress in prediction tasks.
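A minimal sketch of a dissimilarity-gated update one could use for a single level of such a hierarchy; the mean-squared gating criterion is an assumption made for illustration, not the paper's exact mixture construction:

```python
import torch

def dlh_level_update(z_past, z_future, threshold):
    # Dissimilarity between the "immediate past" component (filtered
    # from observations) and the "predicted future" component (from
    # the learned prior).
    dissim = (z_past - z_future).pow(2).mean(dim=-1, keepdim=True)
    # The level transitions only when the two components disagree
    # enough, so temporally persistent states are clustered together.
    gate = (dissim > threshold).float()
    return gate * z_future + (1.0 - gate) * z_past

z = dlh_level_update(torch.randn(8, 32), torch.randn(8, 32), threshold=1.5)
```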
Determining and predicting reservoir formation properties for newly drilled wells represents a significant challenge. One way to evaluate these properties is well-interval similarity. Many methodologies for similarity learning exist, from rule-based approaches to deep neural networks. Recent works have adopted, e.g., recurrent neural networks to build a similarity model, since we deal with sequential data. Such an approach suffers from short-term memory, as it pays more attention to the end of a sequence. Neural networks with the Transformer architecture instead cast their attention over the whole sequence to make a decision. To make them more efficient in terms of computation time, we introduce a limited attention mechanism similar to those of the Informer and Performer architectures. We conduct experiments on open datasets with more than 20 wells, making our experiments reliable and suitable for industrial usage. The best results were obtained with our adaptation of the Informer variant of the Transformer, with ROC AUC 0.982. It outperforms classical approaches (ROC AUC 0.824), recurrent neural networks (ROC AUC 0.934), and straightforward usage of Transformers (ROC AUC 0.961).
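A minimal sketch of a limited (ProbSparse-style) attention layer in the spirit of Informer; for clarity the sparsity proxy is computed on the full score matrix, whereas Informer subsamples keys to keep the proxy sub-quadratic:

```python
import torch

def limited_attention(q, k, v, top_u):
    # q, k, v: (batch, seq_len, dim)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5            # (B, L, L)
    # Informer-style sparsity proxy: "active" queries have peaked
    # attention distributions (large max-minus-mean of their logits).
    sparsity = scores.max(dim=-1).values - scores.mean(dim=-1)
    idx = sparsity.topk(top_u, dim=-1).indices             # (B, top_u)
    # Lazy queries fall back to the mean of the values ...
    out = v.mean(dim=1, keepdim=True).expand_as(q).clone()
    # ... while the top_u active queries get full attention.
    active = torch.gather(scores, 1,
                          idx.unsqueeze(-1).expand(-1, -1, scores.size(-1)))
    attn = active.softmax(dim=-1)                          # (B, top_u, L)
    out.scatter_(1, idx.unsqueeze(-1).expand(-1, -1, d), attn @ v)
    return out

x = torch.randn(2, 128, 64)
y = limited_attention(x, x, x, top_u=16)   # (2, 128, 64)
```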
Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of a small, efficient version of commercial-scale models with comparable accuracy, accelerating their deployment to resource-constrained devices. In this paper, we study the security considerations of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large models easier. In evaluations across 19 DNNs, by exploiting the published on-device models as a transfer prior, the adversarial vulnerability of the original commercial-scale models increases by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient counterpart increases. Based on these insights, we propose a defense, $similarity$-$unpairing$, that fine-tunes on-device models with the objective of reducing this similarity. We evaluated our defense on all 19 DNNs and found that it reduces transferability by up to 90% and increases the number of queries required by a factor of 10-100x. Our results suggest that further research is needed on the security (or even privacy) threats caused by publishing these efficient siblings.
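A minimal sketch of one plausible instantiation of similarity-unpairing: fine-tune the on-device model on its task loss while penalizing the cosine similarity of its input gradients to the full model's. The gradient-cosine penalty is an assumption for illustration; the paper's exact objective may differ:

```python
import torch
import torch.nn.functional as F

def unpairing_loss(full_model, device_model, x, y, lam=1.0):
    # Input gradient of the (frozen) full-scale model.
    x_full = x.clone().requires_grad_(True)
    g_full = torch.autograd.grad(
        F.cross_entropy(full_model(x_full), y), x_full)[0]

    # Input gradient of the on-device model, kept differentiable so
    # the similarity penalty trains the on-device weights.
    x_dev = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(device_model(x_dev), y)
    g_dev = torch.autograd.grad(task_loss, x_dev, create_graph=True)[0]

    # Keep the on-device model accurate while pushing its input
    # gradients away from the full model's, so adversarial examples
    # crafted on the small model transfer less.
    sim = F.cosine_similarity(g_dev.flatten(1),
                              g_full.detach().flatten(1), dim=1)
    return task_loss + lam * sim.mean()

# usage: loss = unpairing_loss(big, small, images, labels); loss.backward()
```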